
Knowledge Representation and Machine Learning

Research

The working group Knowledge Representation and Machine Learning pursues three research questions: How can we improve machine learning through human prior knowledge? How can we make machine learning models and their decisions understandable for humans? And: How can we use machine learning to increase human knowledge? To make human knowledge machine-readable, we represent it as structured data, such as grammars and knowledge graphs. Our main field of application is education, where the processing and growth of knowledge are particularly central. At the same time, education is a particularly sensitive domain, so we attach special importance to machine learning methods that are responsible, transparent, and fair.

In detail, our main research areas are:

  • Educational data mining, learning analytics and machine learning for education
  • Intelligent tutoring systems
  • Machine learning on structured data (e.g. graph neural networks, autoencoders for structures, edit distances)
  • Machine learning with prior knowledge and data-efficient machine learning (e.g. few-shot learning, transfer learning)
  • Interpretable and explainable machine learning
  • Fairness in machine learning

Our current research projects are:

Intelligent Tutoring System

Funded by the Technical Faculty, Alina Deriyeva is developing an intelligent tutoring system that supports programming education at the Technical Faculty alongside lectures and tutorials. An initial version was first used in the winter term 2023/2024 for the courses "Introduction to Machine Learning" and "Introduction to Data Mining." The system is not only intended to support teaching but also to serve as a research platform for exploring new artificial intelligence methods in programming education.

Cognitive Support through Explainable Artificial Intelligence in Tutoring Systems

As part of subproject 3.5 of the SAIL research project, Jesper Dannath is researching strategies of explainable artificial intelligence and machine learning to support human learning. This project builds upon the Intelligent Tutoring System as a research platform.

Explaining Learner Models with Language Models (XLM)

The XLM project is part of the state research network KI:edu.nrw. Within it, Alina Deriyeva develops novel approaches to explain learner models to teachers. Learner models are learning analytics and machine learning methods that estimate how learners' abilities develop based on their observable behavior, e.g. which tasks they solve correctly or incorrectly. The explanations are intended to give teachers deeper insight into learning analytics and to support them in making informed decisions about how to support students. The explanations will also be evaluated in user studies.
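
To illustrate what a learner model does, a classic example is Bayesian Knowledge Tracing, which tracks the probability that a learner has mastered a skill and updates it after every observed answer. The following is a minimal sketch of that general technique, not of the specific models developed in the project; all parameter values are illustrative assumptions.

    # Minimal Bayesian Knowledge Tracing (BKT) sketch: a classic learner model.
    # The parameter values are illustrative assumptions, not project values.

    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        """Update the estimated probability that a learner masters a skill
        after observing one task outcome (correct / incorrect)."""
        if correct:
            # Bayes rule: a correct answer comes from mastery (without a slip)
            # or from guessing without mastery.
            posterior = p_know * (1 - p_slip) / (
                p_know * (1 - p_slip) + (1 - p_know) * p_guess)
        else:
            posterior = p_know * p_slip / (
                p_know * p_slip + (1 - p_know) * (1 - p_guess))
        # Learning transition: the learner may acquire the skill by practicing.
        return posterior + (1 - posterior) * p_learn

    p = 0.3  # prior probability of mastery
    for outcome in [True, False, True, True]:  # observed task results
        p = bkt_update(p, outcome)
        print(f"estimated mastery: {p:.2f}")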

Evaluating language model text detectors and RAG chatbots in education

As part of the KI-Akademie OWL, Lukas Gehring evaluates state-of-the-art detectors that distinguish between human-written and LLM-written text. He develops novel benchmarks that focus on detection in an educational context, where decisions based on detections are high-stakes and edge cases are likely.

In addition, Lukas Gehring investigates the potential and limitations of RAG chatbots in education. RAG stands for "retrieval-augmented generation": instead of answering student questions directly, the chatbot first retrieves relevant material from a database of domain knowledge and then generates its answer from that material.
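
A minimal sketch of the retrieval step, assuming a plain TF-IDF retriever over a few course snippets; the language model call itself is left out, since the project's actual retriever and model are not specified here:

    # Minimal retrieval-augmented generation (RAG) sketch. Assumptions:
    # TF-IDF retrieval over course snippets; any chat model could be plugged
    # in to answer the resulting prompt.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Gradient descent minimizes a loss by stepping against its gradient.",
        "k-means clustering assigns points to the nearest of k centroids.",
        "Decision trees split the data on the feature with the best criterion.",
    ]

    def build_prompt(question, k=2):
        # Vectorize the corpus and the question in the same TF-IDF space.
        vectorizer = TfidfVectorizer().fit(documents + [question])
        doc_vecs = vectorizer.transform(documents)
        q_vec = vectorizer.transform([question])
        # Retrieve the k most similar snippets instead of answering directly.
        scores = cosine_similarity(q_vec, doc_vecs)[0]
        top = scores.argsort()[::-1][:k]
        context = "\n".join(documents[i] for i in top)
        # The prompt grounds the model's answer in the retrieved material.
        return f"Answer only from this context:\n{context}\n\nQuestion: {question}"

    print(build_prompt("How does k-means work?"))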

Interactive Machine Learning Lab

Dr. Adia Khalid leads the Interactive Machine Learning Lab, funded by the Technical Faculty. The laboratory conducts studies on how people interact with machine learning systems, especially when solving cognitively challenging tasks and making decisions. The central question is whether explainable machine learning methods can enhance the problem-solving and decision-making abilities of individuals.

Data-Efficient Machine Learning for Biomedical Time Series

Rui Liu is researching machine learning methods that can analyze biomedical time series (particularly electromyograms) with minimal training data. The primary application is bionic prostheses, where patients control a robotic hand prosthesis solely through the activation of their arm muscles.

Development of a Chatbot for Student Advisory Support

Within a research project of the Technical Faculty, Liliana Sanfilippo and Jasper Matzat are developing an interactive advisory system (a chatbot) that is available to students and prospective students of the Technical Faculty around the clock and can answer common questions. The chatbot is not intended to replace student advising but serves as a complementary tool, directing users to personal advisory appointments when their needs extend beyond simple inquiries.

Applicability and Limitations of Language Models in the Social Sciences

As part of the SAIL research consortium, Aida Kostikova researches the applications and limits of language models in the social sciences. In particular, she investigates whether language models can annotate political speeches in a manner similar to human experts.

External: Language Models for Vocational Education

In the context of an external doctoral program at the Educational Technology Lab of the German Research Center for Artificial Intelligence, Alonso Garibay is working on the application of language models for vocational education. Language models are intended to assist in finding and generating tailored learning content for both learners and educators.

External: Generative Explanation Methods for Images

As part of an external doctoral program at the CAIRO research center of the Technical University of Applied Sciences Würzburg-Schweinfurt, Philipp Väth is developing machine learning methods that modify images in order to explain the decisions of image classifiers. The primary field of application is medicine, where diagnostic decisions based on images, such as X-rays or tissue sections, often require explanations.
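
To illustrate the general idea of image-modifying explanations (a hedged sketch, not the project's method): a counterfactual explanation nudges an input image with gradient steps until the classifier's decision flips, so the change itself shows what the decision depends on. The model and parameters below are assumptions.

    # Hedged counterfactual-explanation sketch: modify an image with gradient
    # steps until the classifier predicts a chosen target class. The changed
    # pixels then indicate what drove the original decision.
    # "model" stands for any image classifier mapping a batch to logits.
    import torch

    def counterfactual(model, image, target_class, steps=100, lr=0.05):
        x = image.clone().requires_grad_(True)
        for _ in range(steps):
            logits = model(x.unsqueeze(0))[0]
            if logits.argmax().item() == target_class:
                break  # decision flipped; x - image shows the relevant change
            # Increase the target-class logit by ascending its gradient.
            loss = -logits[target_class]
            loss.backward()
            with torch.no_grad():
                x -= lr * x.grad
                x.clamp_(0.0, 1.0)  # keep x a valid image
                x.grad.zero_()
        return x.detach()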
